11 research outputs found

    TacIPC: Intersection- and Inversion-free FEM-based Elastomer Simulation For Optical Tactile Sensors

    Tactile perception is a critical sensory modality for human interaction with the environment. Among tactile sensing techniques, optical sensor-based approaches have gained traction, notably for producing high-resolution tactile images. This work explores gel elastomer deformation simulation through a physics-based approach. While previous works in this direction usually adopt the explicit material point method (MPM), which has certain limitations in force simulation and rendering, we adopt the finite element method (FEM) and address the challenges of penetration and mesh distortion with the incremental potential contact (IPC) method. As a result, we present a simulator named TacIPC, which ensures numerically stable simulations while accommodating direct rendering and friction modeling. To evaluate TacIPC, we conduct three tasks: pseudo-image quality assessment, deformed geometry estimation, and marker displacement prediction. These tasks show its superior efficacy in reducing the sim-to-real gap. Our method can also seamlessly integrate with existing simulators. More experiments and videos can be found in the supplementary materials and on the website: https://sites.google.com/view/tac-ipc
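    The core IPC idea referenced above can be illustrated with a small sketch. The following is a minimal, hypothetical implementation of the log-barrier contact energy commonly used in IPC-style methods: the energy vanishes for distances beyond an activation threshold `d_hat` and grows without bound as the distance approaches zero, which is what rules out interpenetration. This is an illustration of the general technique, not TacIPC's actual code.

```python
import numpy as np

def ipc_barrier(d, d_hat):
    """Log-barrier contact energy in the style of IPC.

    Zero for d >= d_hat (no contact force far from the surface);
    tends to +inf as d -> 0, so an energy-minimizing solver can
    never step into an intersecting configuration.
    """
    d = np.asarray(d, dtype=float)
    return np.where(
        d < d_hat,
        -((d - d_hat) ** 2) * np.log(np.clip(d, 1e-300, None) / d_hat),
        0.0,
    )

# The barrier activates only inside the threshold d_hat:
print(ipc_barrier(0.5, d_hat=0.1))   # 0.0 (outside threshold)
print(ipc_barrier(0.05, d_hat=0.1))  # > 0 (contact is penalized)
```

    In a full solver, this term is summed over all nearby surface primitive pairs and minimized together with the FEM elastic energy, with line search keeping every step intersection-free.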

    Proxy-based sliding-mode tracking control of dielectric elastomer actuators through eliminating rate-dependent viscoelasticity

    This work was partially supported by the State Key Laboratory of Mechanical Transmissions (SKLMT-ZDKFKT-202004) and the National Natural Science Foundation of China (52005322 and 52025057). Peer reviewed. Postprint.

    Low-Cost Exoskeletons for Learning Whole-Arm Manipulation in the Wild

    While humans can use parts of their arms other than the hands for manipulations like gathering and supporting, whether robots can effectively learn and perform the same type of operations remains relatively unexplored. As these manipulations require joint-level control to regulate the complete poses of the robots, we develop AirExo, a low-cost, adaptable, and portable dual-arm exoskeleton, for teleoperation and demonstration collection. As collecting teleoperated data is expensive and time-consuming, we further leverage AirExo to collect cheap in-the-wild demonstrations at scale. Under our in-the-wild learning framework, we show that with only 3 minutes of teleoperated demonstrations, augmented by diverse and extensive in-the-wild data collected by AirExo, robots can learn a policy that is comparable to or even better than one learned from teleoperated demonstrations lasting over 20 minutes. Experiments demonstrate that our approach enables the model to learn a more general and robust policy across the various stages of the task, enhancing the success rates in task completion even in the presence of disturbances. Project website: https://airexo.github.io
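    The data-pooling idea in this framework (a small teleoperated set augmented by a much larger in-the-wild set, trained as one behavior-cloning dataset) can be caricatured as follows. Everything here is a stand-in: random arrays replace real demonstrations, and a linear least-squares map replaces the actual policy network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration sets: (observation, joint-action) pairs.
# A few teleoperated demos plus many cheap in-the-wild demos.
teleop_obs = rng.normal(size=(30, 8))
teleop_act = rng.normal(size=(30, 14))
wild_obs = rng.normal(size=(300, 8))
wild_act = rng.normal(size=(300, 14))

# Pool both sources into one behavior-cloning dataset.
obs = np.concatenate([teleop_obs, wild_obs])
act = np.concatenate([teleop_act, wild_act])

# Linear least-squares "policy" act ~= obs @ W (stand-in for a network).
W, *_ = np.linalg.lstsq(obs, act, rcond=None)
pred = obs @ W
print("dataset size:", len(obs), "policy MSE:", float(np.mean((pred - act) ** 2)))
```

    The point of the sketch is only the shape of the pipeline: the scarce, high-quality teleoperated data and the abundant in-the-wild data enter the same supervised objective.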

    An efficient 3D extraction and reconstruction method for myelinated axons of mouse cerebral cortex based on mixed intelligence

    Accurate reconstruction of the 3D morphology and spatial distribution of myelinated axons in mouse brains is very important for understanding the mechanism and dynamic behavior of long-distance information transmission between brain regions. However, automatic segmentation and reconstruction of myelinated axons is difficult for two reasons: the number of axons is enormous, and their morphology differs between brain regions. Traditional manual labeling methods require a large amount of manpower to label each myelinated axon slice by slice, which greatly hinders the development of the mouse brain connectome. To solve this problem and improve reconstruction efficiency, this paper proposes an annotation generation method that takes the myelinated axon as prior knowledge, greatly reducing manual labeling time while matching the accuracy of manual labeling. The method consists of three steps. First, the 3D axis equation of each myelinated axon is established from sparse manually labeled center points on slices, and the region to be segmented is pre-extracted according to this 3D axis. Second, a U-Net is trained on a small number of manually labeled myelinated axons and used to precisely segment the pre-extracted regions, yielding accurate 2D axon morphology. Finally, based on the segmentation results, a high-precision 3D reconstruction of the axons is performed with Imaris software, recovering the spatial distribution of myelinated axons in the mouse brain. The effectiveness of this method was verified on a dataset of high-resolution X-ray microtomography of the mouse cortex. Experiments show that the method achieves an average mIoU of 81.57 and improves efficiency by more than 1400x compared with manual labeling.
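    The first step of the pipeline described above, fitting a 3D axis to sparse manually labeled center points, can be sketched with a standard line fit. The paper does not specify the fitting method; PCA via SVD (principal direction of the centered points) is an assumed, conventional choice here.

```python
import numpy as np

def fit_axon_axis(points):
    """Fit a 3D line (centroid + unit direction) to sparse center-point
    labels: the direction is the first principal component of the
    centered point cloud."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]  # vt[0] is the dominant direction

# Noisy samples along the line x = t, y = t, z = 2t:
t = np.linspace(0.0, 10.0, 50)
pts = np.stack([t, t, 2 * t], axis=1)
pts += 0.01 * np.random.default_rng(1).normal(size=(50, 3))
centroid, direction = fit_axon_axis(pts)
print(direction / direction[0])  # ~ [1, 1, 2]
```

    The fitted axis then defines a tube around which the candidate segmentation region is pre-extracted before the U-Net refines it.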

    Fast Object Detection in Light Field Imaging by Integrating Deep Learning with Defocusing

    Although four-dimensional (4D) light field imaging has many advantages over traditional two-dimensional (2D) imaging, its high computational cost often hinders its application in many fields, such as object detection and tracking. This paper presents a hybrid method to accelerate object detection in light field imaging by integrating deep learning with a depth estimation algorithm. The method takes full advantage of the computational imaging of the light field to generate an all-in-focus image, a series of focal stacks, and multi-view images at the same time; a convolutional neural network and defocus analysis are then used to perform initial detection of the objects in three-dimensional (3D) space. The estimated depths of the detected objects are further refined by multi-baseline super-resolution stereo matching, while efficiency is maintained by compressing the search space of the disparity. Experimental studies demonstrate the effectiveness of the proposed method.

    High-Precision Anti-Interference Control of Direct Drive Components

    This study presents a compound control algorithm that enhances the servo accuracy and disturbance suppression capability of direct drive components (DDCs). The servo performance of DDCs is easily affected by external disturbances and by the deterioration of assembly characteristics due to the lack of a deceleration device. The purpose of this study is to compensate for the impact of external and internal disturbances on the system. First, a linear state-space model of the system is established. Second, we analyze the main factors restricting the performance of DDCs, which include sensor noise, friction, and external disturbance. Then, a fractional-order proportional-integral (FOPI) controller is used to eliminate the steady-state error caused by time-invariant disturbances, which also improves the system's anti-interference capability. A state-augmented Kalman filter (SAKF) is proposed to suppress quantization noise and compensate for time-varying disturbances simultaneously. The effectiveness of the proposed compound algorithm is demonstrated by comparative experiments, showing a maximum improvement of 89.34%. The experimental results show that, compared with the traditional PI controller, the FOPI-SAKF controller not only improves the tracking accuracy of the system but also enhances its disturbance suppression ability.
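    The state-augmentation idea behind the SAKF can be sketched on a toy plant: append the unknown input disturbance to the state vector, and a standard Kalman filter then estimates it from the same measurements, so the controller can cancel it. The scalar plant, noise levels, and gains below are assumptions for illustration, not the paper's DDC model.

```python
import numpy as np

dt = 0.01
# Augmented state z = [x, d]: x_{k+1} = x_k + dt*(u_k + d), d constant.
A = np.array([[1.0, dt],
              [0.0, 1.0]])
Bu = np.array([dt, 0.0])        # how the known input u enters
C = np.array([[1.0, 0.0]])      # only x is measured
Q = np.diag([1e-8, 1e-8])       # small process noise keeps P positive
R = np.array([[1e-6]])          # measurement noise (std 1e-3)

d_true, x_true = 0.5, 0.0       # unknown constant disturbance
z = np.zeros(2)                 # filter starts assuming no disturbance
P = np.eye(2)
rng = np.random.default_rng(0)

for _ in range(2000):
    u = 0.0
    x_true += dt * (u + d_true)                    # true plant
    y = x_true + rng.normal(scale=1e-3)            # noisy sensor
    z = A @ z + Bu * u                             # predict
    P = A @ P @ A.T + Q
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # Kalman gain
    z = z + K @ (np.array([y]) - C @ z)            # update
    P = (np.eye(2) - K @ C) @ P

print("estimated disturbance:", z[1])  # ~ 0.5
```

    In the compound scheme, the estimated `d` is fed back to cancel time-varying disturbances while the FOPI loop removes any residual steady-state error.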